2024-03-07 01:31:38
Public Trust In AI Is Sinking Across the Board - Slashdot https://tech.slashdot.org/story/24/03/06/2116226/public-trust-in-ai-is-sinking-across-the-board
CAKE: Sharing Slices of Confidential Data on Blockchain
Edoardo Marangone, Michele Spina, Claudio Di Ciccio, Ingo Weber
https://arxiv.org/abs/2405.04152 ht…
Verifiable by Design: Aligning Language Models to Quote from Pre-Training Data
Jingyu Zhang, Marc Marone, Tianjian Li, Benjamin Van Durme, Daniel Khashabi
https://arxiv.org/abs/2404.03862
At this point, to keep any semblance of privacy, you basically have the choice between Apple (easier and smoother, but you need to trust them) or open-source solutions (rough around the edges, requiring time & technical knowledge).
But even if you do, if you communicate with other people, they will likely be harvested for the data you share with them and that they share with you.
So there’s little escape.
For he makes the bars of your gates strong. He blesses your children within you. Psalm 147.13 NET.
Blessed are the people who trust in the Lord.
#Bible #BibleVerses #Psalms
FEDQ-Trust: Efficient Data-Driven Trust Prediction for Mobile Edge-Based IoT Systems
Jiahui Bai, Hai Dong, Athman Bouguettaya
https://arxiv.org/abs/2404.18356
Do You Trust Your Model? Emerging Malware Threats in the Deep Learning Ecosystem
Dorjan Hitaj, Giulio Pagnotta, Fabio De Gaspari, Sediola Ruko, Briland Hitaj, Luigi V. Mancini, Fernando Perez-Cruz
https://arxiv.org/abs/2403.03593
MGA: When it comes to trust in citizen science, the majority of policy makers had inherent trust in CS-generated data, and have confidence in their own community members. #ecsa2024
This https://arxiv.org/abs/2211.13715 has been replaced.
link: https://scholar.google.com/scholar?q=a
“You’re storing the user data in a place that syncs with iCloud! Apple is an evil megacorp and they can’t be trusted!”
It’s an iOS app, genius.
Trust doesn’t work like “I trust *evil* Apple to definitely not spy on my inputs, but I TOTALLY don’t trust them with the data that those inputs create.”
You concede to trusting them enough, or you don’t.
If you don’t, that’s fine (just don’t use them), but your threat model is dumb if you only trust them for the worst part.…
This https://arxiv.org/abs/2403.01932 has been replaced.
link: https://scholar.google.com/scholar?q=a
Holy shit, WordPress.com/Automattic is really trying to destroy all the goodwill it has built over years.
Selling user posts to "AI" companies is such a breach of trust and an insult. It negates any respect for user contribution that is the basis for a good and sustainable relationship.
https://mastod…
🙀 Data Analysis Doesn't Have to Be Scary! 📊
Are you one of those people who breaks out in a cold sweat at the mere mention of Pivot Tables? I get it! They can seem super intimidating at first. But trust me, they're powerful tools that can transform the way you work with data.
That's why I made this new video tutorial all about breaking down Pivot Tables and making them approachable. In it, I'll guide you step-by-step through creating your own interactive Sales D…
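For readers who prefer code to spreadsheets, the core idea behind a Pivot Table (group rows by one key, columns by another, aggregate the values) can be sketched in a few lines of Python. The sales figures below are purely illustrative, not taken from the video:

```python
from collections import defaultdict

# Illustrative sales rows: (region, product, revenue)
sales = [
    ("East", "A", 100), ("East", "B", 150),
    ("West", "A", 200), ("West", "B", 50),
    ("East", "A", 25),
]

# A pivot table groups rows by one key (region), columns by
# another (product), and aggregates the values -- here, summing revenue.
pivot = defaultdict(lambda: defaultdict(int))
for region, product, revenue in sales:
    pivot[region][product] += revenue

for region, by_product in sorted(pivot.items()):
    print(region, dict(by_product))
# East {'A': 125, 'B': 150}
# West {'A': 200, 'B': 50}
```

The spreadsheet version does the same aggregation interactively; the nested-dict sketch just makes the group-by-two-keys structure explicit.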
A Universe of Sound - processing NASA data into sonifications to explore participant response: #sonified NASA data of three astronomical objects presented as aural visualizations, then surveyed blind or low-vision and sighted individuals to elicit feedback on the experience of these pieces as it relates to enjoyment, education, and trust of the scientific data.
This is how different sleep tracking is with Apple and Garmin watches: https://alpo.gitlab.io/jots/posts/2024/04/sleep-tracking-why-data-from-apple-and-garmin-watches-is-so-different/
If you are in the sam…
Guided By AI: Navigating Trust, Bias, and Data Exploration in AI-Guided Visual Analytics
Sunwoo Ha, Shayan Monadjemi, Alvitta Ottley
https://arxiv.org/abs/2404.14521
Collaborative Active Learning in Conditional Trust Environment
Zan-Kai Chong, Hiroyuki Ohsaki, Bryan Ng
https://arxiv.org/abs/2403.18436 https://
“Customer trust is a priority for (our PR office) and we are actively evaluating our privacy processes and (how we don’t get caught doing this again in the future).” - General Motors*
*And, soon, every other automaker when similarly confronted.
https://www.nytimes.com/2024/03/22/…
“Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman [...] Globally, trust in AI companies has dropped to 53%, down from 61% five years ago. In the U.S., trust has dropped 15 percentage points (from 50% to 35%) over the same period.”
https:…
This https://arxiv.org/abs/2401.16643 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csIT_…
Oh this has to feature in a @… thread, surely :)
Meta needs competitive data
Meta mods VPN to sniff traffic
Wow.
https://mastodon.social/@dangillmor/11
Before you head out for the weekend, check out today's Metacurity for the top infosec developments you should know, including
--North Korea's Lazarus Group returns to Tornado Cash
--US Attorney to seize $2.3m for pig butchering victims
--Hackers stole data on 43m from French unemployment agency
--FTC fines two fraudulent tech support firms
--Attackers stole $2m from a single crypto investor
--FCC approves Cyber Trust Mark labels
--Florida man sues GM and LexisNexis for collecting car data
--Zscaler buys Avalor for $350m
--much more
https://www.metacurity.com/p/north-koreas-lazarus-group-returns-tornado-cash
Large Language Models and User Trust: Focus on Healthcare
Avishek Choudhury, Zaria Chaudhry
https://arxiv.org/abs/2403.14691 https://…
SmartQC: An Extensible DLT-Based Framework for Trusted Data Workflows in Smart Manufacturing
Alan McGibney, Tharindu Ranathunga, Roman Pospisil
https://arxiv.org/abs/2402.17868
Pipeline Provenance for Analysis, Evaluation, Trust or Reproducibility
Michael A. C. Johnson, Hans-Rainer Klöckner, Albina Muzafarova, Kristen Lackeos, David J. Champion, Marta Dembska, Sirko Schindler, Marcus Paradies
https://arxiv.org/abs/2404.14378
User Characteristics in Explainable AI: The Rabbit Hole of Personalization?
Robert Nimmo, Marios Constantinides, Ke Zhou, Daniele Quercia, Simone Stumpf
https://arxiv.org/abs/2403.00137
This https://arxiv.org/abs/2212.06540 has been replaced.
link: https://scholar.google.com/scholar?q=a
"Microsoft’s “security culture [is] inadequate and requires an overhaul, particularly in light of the company’s centrality in the technology ecosystem and the level of trust customers place in the company to protect their data and operations.”"
Microsoft is 'ground zero' for state-sponsored hackers, exec says
"The worst part of it... is the lack of understanding about what privacy means, while telling their users they are super serious about it. Add to that the CEO’s 'trust me, bro' attitude, their deals with the shady and homophobic crypto company Brave, and many other things, and the conclusion is that, no, your data is not safe at Kagi at all, and with their primary business being 'AI' and not search, you know exactly what that means. Do not use Kagi."
Waiting in the doctor's office.
What better to do with the time than reading through @… 's "third-party trust model for direct personal data transfers" report?
https://…
«[#CitizenScience] Project leaders who described misalignment between their own goals and what they perceived to be their organization’s goals more frequently reported challenges related to balancing scientists’ and volunteers’ interests, convincing colleagues to trust data, and being part-time employees» https://www.tandfonline.com/doi/full/10.1080/08941920.2024.2329914
Interpretable Prediction and Feature Selection for Survival Analysis
Mike Van Ness, Madeleine Udell
https://arxiv.org/abs/2404.14689 https://
This https://arxiv.org/abs/2401.02306 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_ees…
This https://arxiv.org/abs/2402.07614 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_mat…
Situations like this are a prime example of why we have to end the death penalty. As long as you have fallible human beings imposing an irreversible sentence on people, you're going to have errors and abuse.
#justicereform #deathpenalty
Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning
Ji Liu, Chunlu Chen, Yu Li, Lin Sun, Yulun Song, Jingbo Zhou, Bo Jing, Dejing Dou
https://arxiv.org/abs/2403.19178
My current take on the #xz situation, not having read the actual source backdoor commits yet (thanks a lot #Github for hiding the evidence at this point...), just what others have written about it: using #rustlang for such central library dependencies would maybe (really big maybe) have made it a bit harder to push a backdoor like this, because - if and only if the safety features are used idiomatically in an open source project - reasonable-looking code is (a bit?) more limited in the sneaky behavior it could include. We should still very much use those languages over C/C++ for infrastructure code, because the much larger class of unintentional bugs is significantly mitigated, and I believe (without data to back it up) that even such "bugdoor" type changes will be harder to execute. However, given the sophistication in this case, it may not have helped at all. The attacker(s) have shown themselves to be clever enough.
6. Sandboxing library code may have helped - as the attacker(s) explicitly disabled e.g. landlock, that might already have had some impact. We should create better tooling to make it much easier to link to infrastructure libraries in a sandboxed way (although that will have performance implications in many cases).
7. Automatic reproducible builds verification would have mitigated this particular vector of backdoor distribution, and the Debian team seems to be using the reproducibility advances of the last decade to verify/rebuild the build servers. We should build library and infrastructure code in a fully reproducible manner *and* automatically verify it, e.g. with added transparency logs for both source and binary artefacts. In general, it does however not prevent this kind of supply chain attack that directly targets source code at the "leaf" projects in Git commits.
8. Verifying the real-life identity of contributors to open source projects is hard and a difficult trade-off. Something similar to the #Debian #OpenPGP #web-of-trust would potentially have mitigated this style of attack somewhat, but with a different trade-off. We might have to think much harder about trust in individual accounts, and for some projects requiring a link to a real-world country-issued ID document may be the right balance (for others it wouldn't work). That is neither an easy nor a quick path, though. Also note that sophisticated nation state attackers will probably not have a problem procuring "good" fake IDs. It might still raise the bar, though.
9. What happened here seems clearly criminal - at least under my IANAL naive understanding of EU criminal law. There was clear intent to cause harm, and that makes the specific method less important. The legal system should also be able to help in mitigating supply chain attacks; not in preventing them, but in making them more costly if attackers can be tracked down (this is difficult in itself, see point 8) and face risk of punishment after the fact.
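The reproducible-builds check in point 7 boils down to a simple invariant: two independent builds of the same source must produce byte-identical (hence hash-identical) artifacts, so any injected payload shows up as a digest mismatch. A minimal sketch, with simulated artifact bytes standing in for real build outputs:

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 hex digest of a build artifact's bytes."""
    return hashlib.sha256(artifact).hexdigest()

def builds_match(artifact_a: bytes, artifact_b: bytes) -> bool:
    """Reproducibility invariant: independent builds of the same
    source must be byte-identical, so their digests must agree."""
    return digest(artifact_a) == digest(artifact_b)

# Simulated artifacts from two independent builders (illustrative bytes,
# not real liblzma objects):
clean = b"\x7fELF...object code..."
backdoored = b"\x7fELF...object code...\x90\x90payload"

print(builds_match(clean, clean))       # identical builds agree
print(builds_match(clean, backdoored))  # a tampered binary is flagged
```

In practice this comparison runs over transparency-logged digests from multiple independent builders; as the toot notes, it catches tampering in the build/distribution step but not malicious source committed upstream.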
H/T @… @… @… @… @…
Would You Trust an AI Doctor? Building Reliable Medical Predictions with Kernel Dropout Uncertainty
Ubaid Azam, Imran Razzak, Shelly Vishwakarma, Hakim Hacid, Dell Zhang, Shoaib Jameel
https://arxiv.org/abs/2404.10483
Enabling Zero Trust Security in IoMT Edge Network
Maha Ali Allouzi, Javed Khan
https://arxiv.org/abs/2402.10389 https://arxiv.org/pdf…
Riemannian trust-region methods for strict saddle functions with complexity guarantees
Florentin Goyens, Clément Royer
https://arxiv.org/abs/2402.07614
A Survey of Explainable Knowledge Tracing
Yanhong Bai, Jiabao Zhao, Tingjiang Wei, Qing Cai, Liang He
https://arxiv.org/abs/2403.07279 https://
Integrating Blockchain technology within an Information Ecosystem
Francesco Salzano, Lodovica Marchesi, Remo Pareschi, Roberto Tonelli
https://arxiv.org/abs/2402.13191
This https://arxiv.org/abs/2312.08156 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCR_…
SoK: Trusting Self-Sovereign Identity
Evan Krul, Hye-young Paik, Sushmita Ruj, Salil S. Kanhere
https://arxiv.org/abs/2404.06729 https://
This https://arxiv.org/abs/2305.08642 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_sta…
This https://arxiv.org/abs/2212.03218 has been replaced.
link: https://scholar.google.com/scholar?q=a
This https://arxiv.org/abs/2402.08322 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCR_…
zk-IoT: Securing the Internet of Things with Zero-Knowledge Proofs on Blockchain Platforms
Gholamreza Ramezan, Ehsan Meamari
https://arxiv.org/abs/2402.08322
This https://arxiv.org/abs/2309.05769 has been replaced.
initial toot: https://mastoxiv.page/@arXiv_csCR_…